This file describes the work and results of the fourth week, a.k.a. “Clustering and classification”, of the IDOS2023 course.
The Boston data set from the MASS package in R consists of information on different characteristics of suburbs in Boston, Massachusetts, US. Variables include, amongst others, the per capita crime rate by town (crim), nitrogen oxides concentration (nox), the average number of rooms per dwelling (rm), and the median value of owner-occupied homes (medv).
The data set contains 506 rows (observations) and 14 columns (variables), all of which are numeric; none are character. chas is a binary integer, and rad an integer. More information on the data set, its variables, and the abbreviations can be found here.
# access all packages needed in this chunk
library(MASS)
library(dplyr)
library(tidyverse)
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506 14
The black-and-white plot matrix below shows that several variables split into two groups, e.g., chas (binary), rad, and tax. crim and zn show that many observations equal 0 (i.e., no crime and no residential land zoned for large lots, respectively). Most of the variables are not normally distributed: for example, the proportion of owner-occupied units built prior to 1940 (age) is skewed towards high values, as is black (related to the proportion of black residents in the suburbs).
# access all packages needed in this chunk
library(dplyr)
library(tidyverse)
# show summaries of variables
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
# plot matrix of the variables
pairs(Boston)
# histograms
Boston %>%
  gather() %>%
  ggplot(aes(x = value)) +
  geom_histogram(binwidth = 1) +
  facet_wrap('key', scales = 'free')
The coloured correlation matrix below shows the correlations between the variables more clearly. Large circles indicate strong correlations, whereas small, faintly coloured circles indicate weak or no correlation. Blue and red circles stand for positive and negative correlations, respectively. A very strong positive correlation can be seen, e.g., between rad and tax, and a strong negative correlation, e.g., between age and dis (weighted mean of distances to five Boston employment centres).
# access all packages needed in this chunk
library(corrplot)
library(dplyr)
# calculate the correlation matrix and round it
cor_matrix <- cor(Boston) %>%
  round(digits = 2)
# visualise the correlation matrix
corrplot(cor_matrix, method="circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)
To perform a linear discriminant analysis, it is necessary to scale the data. For this, we subtract the column mean from each column and divide the difference by the standard deviation:
\[\text{scaled}(x) = \frac{x - \text{mean}(x)}{\text{sd}(x)}\]
Looking at the scaled data, the summary shows that the mean of every variable equals 0. Similarly, the standard deviation equals 1 for every variable (checked below for zn and age).
# center and standardize variables
boston_scaled <- Boston %>%
  scale()
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)
# change crim to numeric
boston_scaled$crim <- as.numeric(boston_scaled$crim)
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
sd(boston_scaled$zn)
## [1] 1
sd(boston_scaled$age)
## [1] 1
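The same check can be run for all columns at once; every value should equal 1:
# standard deviations of all scaled variables
sapply(boston_scaled, sd)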
# create a quantile vector of crim
bins <- quantile(boston_scaled$crim)
# create a categorical variable 'crime'
labels <- c("low", "med_low", "med_high", "high")
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = labels)
# look at the table of the new factor crime
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
# divide data set in test and train
# number of rows in the Boston dataset
n <- nrow(boston_scaled)
# choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
# create test set
test <- boston_scaled[-ind,]
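As a quick sanity check, we can verify the sizes of the two sets (sample() truncates n * 0.8 = 404.8 to 404 rows, leaving 102 rows for testing):
# verify the split sizes
nrow(train) # 404
nrow(test)  # 102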
In order to predict what might happen in Boston’s suburbs, we first need to know how well our model works. For this, we split the original data set into a train set (80% of the data) and a test set (20% of the data), train the model on the train set, and evaluate its predictions on the test set.
Linear discriminant analysis (LDA) is a statistical method that finds linear combinations of the explanatory variables (predictors) that separate the target classes as well as possible. It weights the predictors and combines them into so-called linear discriminant functions (LD1, LD2, LD3, see below), each of which captures a share of the between-class variance.
From the summary below, we can see that, based on the training data, 24.5% of the observations belong to the low group, 26.0% to med_low, 25.5% to med_high, and 24.0% to high (“Prior probabilities of groups”). The proportion of trace shows how the between-class variance is distributed over the linear discriminant functions: 96.3% of it is explained by the first linear discriminant function (LD1). The coefficients of linear discriminants indicate that rad (index of accessibility to radial highways) dominates LD1 (coefficient 4.08), while all other variables have coefficients close to 0.
# crime = target variable, . = all other (explanatory) variables
lda.fit <- lda(crime ~ ., data = train)
# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2450495 0.2599010 0.2549505 0.2400990
##
## Group means:
## zn indus chas nox rm age
## low 1.0236375 -0.8809087 -0.11325431 -0.8771014 0.47659037 -0.9101738
## med_low -0.0613273 -0.2808293 -0.04735191 -0.5657921 -0.14920556 -0.3847114
## med_high -0.3773415 0.1369510 0.10991367 0.3754720 0.02257383 0.4077784
## high -0.4872402 1.0172187 -0.06938576 1.0807325 -0.38575268 0.8329921
## dis rad tax ptratio black lstat
## low 0.8963131 -0.7022949 -0.7309770 -0.4199038 0.38384680 -0.79790823
## med_low 0.4097937 -0.5465475 -0.4689599 -0.0938373 0.31530660 -0.11097675
## med_high -0.3555404 -0.4778837 -0.3753272 -0.2310422 0.09349783 0.08541398
## high -0.8646579 1.6371072 1.5133254 0.7795879 -0.89969391 0.88844810
## medv
## low 0.52426710
## med_low -0.01992829
## med_high 0.11878048
## high -0.71288936
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.133205224 0.64900625 -1.01431751
## indus 0.070754970 -0.05719098 0.37797806
## chas -0.010054618 -0.04506524 0.07030253
## nox 0.363603794 -0.82014141 -1.33462185
## rm -0.007919595 0.02345546 -0.21043298
## age 0.136206271 -0.34683018 -0.16615273
## dis -0.157782407 -0.20550217 0.31225633
## rad 4.076520846 0.82805399 0.01214474
## tax -0.042964751 0.14936312 0.57566001
## ptratio 0.149970917 -0.05546500 -0.43675645
## black -0.092242423 0.07878134 0.13222367
## lstat 0.187608813 -0.28200933 0.39866455
## medv 0.074687414 -0.47296807 -0.10859838
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9627 0.0273 0.0100
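As a side note, the proportion of trace printed above can also be computed directly from the fitted object; a minimal sketch using the singular values stored in lda.fit$svd:
# proportion of between-class variance captured by each discriminant function
lda.fit$svd^2 / sum(lda.fit$svd^2)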
# load function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1, 2)) {
  # extract the coefficients of the linear discriminants
  heads <- coef(x)
  # draw arrows from the origin to the scaled coefficients
  graphics::arrows(x0 = 0, y0 = 0,
                   x1 = myscale * heads[, choices[1]],
                   y1 = myscale * heads[, choices[2]],
                   col = color, length = arrow_heads)
  # label the arrow tips with the variable names
  text(myscale * heads[, choices], labels = row.names(heads),
       cex = tex, col = color, pos = 3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results (select both lines and execute them at the same time!)
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)
This is confirmed by the biplot, where rad appears to be the only variable strongly influencing LD1. The groups separate mostly along LD1 (x-axis), with the high group clustered at one end, well apart from the rest. LD2 (y-axis) shows little discriminative power, i.e., it does not separate the classes well.
After training the model, we can use it to predict classes on the test data. Looking at the per-class accuracy, the predictions are best for high, followed by med_high, low, and med_low (100%, 74%, 57%, and 48%, respectively). This is also evident in the cross-tabulation: all 30 high observations were correctly predicted, whereas, e.g., med_low was often confused with its neighbouring classes.
# save the correct classes from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
conf <- table(correct = correct_classes, predicted = lda.pred$class)
conf
## predicted
## correct low med_low med_high high
## low 16 12 0 0
## med_low 2 10 9 0
## med_high 0 2 17 4
## high 0 0 0 30
# calculate per-class accuracy (recall): share of correct predictions for each true class
diag(conf) / rowSums(conf)
## low med_low med_high high
## 0.5714286 0.4761905 0.7391304 1.0000000
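For completeness, the overall accuracy can be computed from the same confusion matrix; for the table above this gives 73 correct predictions out of 102 test observations, i.e., roughly 0.72:
# overall accuracy: share of all correct predictions
sum(diag(conf)) / sum(conf)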
To state whether objects are similar to one another or not, we can measure distances between them. The most common distance measure is the Euclidean distance, which is the length of the straight line between two points, computed from their coordinates.
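For two observations \(x\) and \(y\) described by \(n\) variables each, it is defined as
\[d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}\]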
K-means is a commonly used clustering method that assigns observations to groups (a.k.a. clusters) based on how similar they are, i.e., on their distances to the cluster centres.
# reload Boston data set
library(MASS)
data("Boston")
# standardise data set
boston_scaled <- as.data.frame(scale(Boston))
boston_scaled$crim <- as.numeric(boston_scaled$crim)
# euclidean distance matrix
dist_eu <- dist(boston_scaled)
# look at the summary of the distances
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
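To connect this to the Euclidean distance formula above, the distance between, e.g., the first two suburbs can be computed by hand and compared with the corresponding entry of the distance matrix; the helper object bsm below is introduced only for this illustration:
# convert to a matrix to index rows as plain numeric vectors
bsm <- as.matrix(boston_scaled)
# Euclidean distance between observations 1 and 2, computed manually
sqrt(sum((bsm[1, ] - bsm[2, ])^2))
# the same value read from the full distance matrix
as.matrix(dist_eu)[1, 2]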
# k-means clustering with 2 clusters
km <- kmeans(boston_scaled, centers = 2)
pairs(boston_scaled, col = km$cluster)
# k-means clustering with 3 clusters
km <- kmeans(boston_scaled, centers = 3)
pairs(boston_scaled, col = km$cluster)
# k-means clustering with 4 clusters
km <- kmeans(boston_scaled, centers = 4)
pairs(boston_scaled, col = km$cluster)
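Note that kmeans() starts from randomly chosen initial centres, so repeated runs can yield different cluster assignments. A minimal sketch to make a run reproducible (the seed value 123 is arbitrary):
# fix the random number generator before clustering
set.seed(123)
km <- kmeans(boston_scaled, centers = 3)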